Bump libcuopt size to 775MiB #303
Conversation
jameslamb
left a comment
Why do we "need" to increase this?
The latest libcuopt wheel build succeeded, and the libcuopt-cu12 wheels were around 560 MB for both x86_64 and arm64:
```text
----- package inspection summary -----
file size
  * compressed size: 0.561G
  * uncompressed size: 0.86G
  * compression space saving: 34.8%
contents
```
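The "compression space saving" figure is just 1 minus the ratio of compressed to uncompressed size; a quick sketch checking it against the sizes reported above:

```python
# Recompute the "compression space saving" from the inspection summary.
compressed_gb = 0.561
uncompressed_gb = 0.86

saving = 1 - compressed_gb / uncompressed_gb
print(f"{saving:.1%}")  # → 34.8%
```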
Around 140 MB of extra space should be PLENTY to not disrupt development here. I honestly would even recommend decreasing this to something like 625M to be notified sooner of unexpected growth in the package sizes.
```diff
 # PyPI limit is 700 MiB, fail CI before we get too close to that
 # 11.X size is 300M compressed and 12.x size is 600M compressed
-max_allowed_size_compressed = '700M'
+max_allowed_size_compressed = '775M'
```
The comment above this is not accurate. There is no "PyPI limit" for this project... libcuopt-cu12 wheels are not published to pypi.org, only to pypi.nvidia.com (which does not have size limits).
That comment should be rewritten to something simple and future-proof, like this:

```
# detect when package size grows significantly
max_allowed_size_compressed = '500M'
```
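For context, a limit like this normally lives in the project's `pyproject.toml`. A minimal sketch, assuming the check is run by a pydistcheck-style tool (the table name below is an assumption; only the `max_allowed_size_compressed` key appears in the diff above):

```toml
# Hypothetical sketch — section name assumed, not taken from this PR.
[tool.pydistcheck]
# detect when package size grows significantly
max_allowed_size_compressed = '775M'
```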
jameslamb
left a comment
Approving this. Please change the title to something like:
Bump libcuopt size limit to 775MiB
I think we discovered offline that it's not papilo mainly adding to this, but the expanded set of GPU architectures: rapidsai/rapids-cmake#897
/merge
Description
New architectures were added to support new chips, which increased the size of the wheel above the previous threshold; this PR raises the threshold to accommodate the change.
Issue
Checklist